Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before moving on to CelebA. Running the GAN on MNIST first lets you see how well your model trains much sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [1]:
#data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change the number of examples displayed by changing show_n_images.

In [2]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[2]:
<matplotlib.image.AxesImage at 0x7f6e301fdfd0>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change the number of examples displayed by changing show_n_images.

In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Out[3]:
<matplotlib.image.AxesImage at 0x7f6e3013fb00>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The MNIST and CelebA images will be 28x28 with pixel values in the range of -0.5 to 0.5. The CelebA images will be cropped to remove the parts of each image that don't include a face, then resized down to 28x28.

The MNIST images are black-and-white with a single color channel, while the CelebA images have 3 color channels (RGB).
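
For intuition, here is a minimal sketch of that kind of scaling (a hypothetical standalone version for illustration; the project's actual preprocessing lives in helper.py), assuming images arrive as uint8 arrays with values in [0, 255]:

import numpy as np

def scale_to_range(images):
    # Hypothetical illustration only; the real logic is in helper.py.
    # Map uint8 pixel values from [0, 255] to [-0.5, 0.5].
    return images.astype(np.float32) / 255.0 - 0.5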

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # Rank-4 placeholder for real input images: (batch, width, height, channels)
    input_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels))
    # Rank-2 placeholder for the latent vectors: (batch, z_dim)
    input_z = tf.placeholder(tf.float32, (None, z_dim))
    # Rank-0 (scalar) placeholder for the learning rate
    lr = tf.placeholder(tf.float32)

    return input_real, input_z, lr


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [6]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # Slope of the leaky ReLU for negative inputs
    alpha = 0.2
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 28x28x(image_channels)
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='valid')
        # Leaky ReLU, written as an elementwise max
        relu1 = tf.maximum(alpha * x1, x1)
        # 12x12x64
        x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
        bn2 = tf.layers.batch_normalization(x2, training=True)
        relu2 = tf.maximum(alpha * bn2, bn2)
        # 6x6x128
        x3 = tf.layers.conv2d(relu2, 256, 3, strides=1, padding='same')
        bn3 = tf.layers.batch_normalization(x3, training=True)
        relu3 = tf.maximum(alpha * bn3, bn3)
        # 6x6x256
        # Flatten it
        flat = tf.reshape(relu3, (-1, 6*6*256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [7]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # Slope of the leaky ReLU for negative inputs
    alpha = 0.2
    # Multiplier that scales the channel count of every layer
    channel_factor = 4
    with tf.variable_scope('generator', reuse=not is_train):
        # First fully connected layer
        x1 = tf.layers.dense(z, 7*7*channel_factor*100)
        # Reshape it to start the convolutional stack
        x1 = tf.reshape(x1, (-1, 7, 7, channel_factor*100))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        x1 = tf.maximum(alpha * x1, x1)
        # 7x7x400
        x2 = tf.layers.conv2d_transpose(x1, channel_factor*50, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        x2 = tf.maximum(alpha * x2, x2)
        # 14x14x200
        x3 = tf.layers.conv2d_transpose(x2, channel_factor*25, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=is_train)
        x3 = tf.maximum(alpha * x3, x3)
        # 28x28x100
        x4 = tf.layers.conv2d(x3, channel_factor*8, 3, strides=1, padding='same')
        x4 = tf.layers.batch_normalization(x4, training=is_train)
        x4 = tf.maximum(alpha * x4, x4)
        # 28x28x32
        # Output layer
        logits = tf.layers.conv2d(x4, out_channel_dim, 5, strides=1, padding='same')
        # 28x28xout_channel_dim
                
        return tf.tanh(logits)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)

In [8]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # One-sided label smoothing: targets for real images are 0.9 instead of 1.0
    smooth = 0.1
    g_out = generator(input_z, out_channel_dim=out_channel_dim)
    d_out, d_real_logits = discriminator(input_real)
    d_z, d_z_logits = discriminator(g_out, reuse=True)
    d_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real_logits, labels=tf.ones_like(d_out)*(1-smooth)))
    d_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_z_logits, labels=tf.zeros_like(d_z)))
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_z_logits, labels=tf.ones_like(d_z)))

    d_loss = d_real_loss + d_fake_loss                                                   

    return d_loss, g_loss
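
As an aside, the smooth = 0.1 above applies one-sided label smoothing: the discriminator's targets for real images become 0.9 instead of 1.0, which discourages overconfident predictions. For reference, tf.nn.sigmoid_cross_entropy_with_logits computes the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)) for a logit x and label z. A minimal NumPy sketch of that computation (illustration only, not part of the project code):

import numpy as np

def sigmoid_cross_entropy(labels, logits):
    # Numerically stable sigmoid cross-entropy, matching the TensorFlow op:
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(labels, dtype=np.float64)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))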

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [9]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # Split the trainable variables by scope so each network gets its own optimizer
    train_vars = tf.trainable_variables()
    d_vars = [v for v in train_vars if v.name.startswith('discriminator')]
    g_vars = [v for v in train_vars if v.name.startswith('generator')]
    # Batch normalization stores its moving-average updates in UPDATE_OPS;
    # running the optimizers under this dependency ensures those updates happen each step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(loss=d_loss, var_list=d_vars)
        g_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(loss=g_loss, var_list=g_vars)    
    return d_opt, g_opt

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [10]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GAN. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to display generator output while you train. Running show_generator_output for every batch would drastically increase training time and the size of the notebook, so it's recommended to show the generator output every 100 batches.

In [11]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build the model
    out_channel_dim = data_shape[3]
    input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2], out_channel_dim, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, out_channel_dim)
    d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
    step = 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # Train the model
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                feed_data = {input_real: batch_images, input_z: batch_z, lr: learning_rate}
                # Run the generator optimizer twice per discriminator update
                # to keep the generator from falling behind
                sess.run(g_opt, feed_dict=feed_data)
                sess.run(g_opt, feed_dict=feed_data)
                sess.run(d_opt, feed_dict=feed_data)

                if step % 20 == 0:
                    dis_loss = sess.run(d_loss, feed_dict=feed_data)
                    gen_loss = sess.run(g_loss, feed_dict=feed_data)
                    print('Epoch %s/%s, Step %s, dis_loss: %s, gen_loss %s' % (epoch_i + 1, epoch_count, step, dis_loss, gen_loss))
                if step % 100 == 0:
                    show_generator_output(sess, 64, input_z, out_channel_dim, data_image_mode)
                step += 1
                

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator stays lower than the loss of the discriminator, or close to it.

CelebA

Run your GAN on CelebA. One epoch takes around 20 minutes on an average GPU. You can run the whole epoch or stop when it starts to generate realistic faces.

In [12]:
batch_size = 32
z_dim = 100
l_rate = 0.0005
beta1 = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, l_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
Epoch 1/2, Step 0, dis_loss: 0.421984, gen_loss 4.13361
Epoch 1/2, Step 20, dis_loss: 1.83137, gen_loss 4.78835
Epoch 1/2, Step 40, dis_loss: 1.33906, gen_loss 0.661923
Epoch 1/2, Step 60, dis_loss: 1.06973, gen_loss 0.774443
Epoch 1/2, Step 80, dis_loss: 1.50347, gen_loss 0.485997
Epoch 1/2, Step 100, dis_loss: 1.2174, gen_loss 1.2203
Epoch 1/2, Step 120, dis_loss: 1.13367, gen_loss 1.34946
Epoch 1/2, Step 140, dis_loss: 1.12175, gen_loss 1.14715
Epoch 1/2, Step 160, dis_loss: 1.0871, gen_loss 1.05493
Epoch 1/2, Step 180, dis_loss: 1.16436, gen_loss 2.05855
Epoch 1/2, Step 200, dis_loss: 1.17397, gen_loss 0.732948
Epoch 1/2, Step 220, dis_loss: 1.06999, gen_loss 0.850568
Epoch 1/2, Step 240, dis_loss: 1.19025, gen_loss 1.86588
Epoch 1/2, Step 260, dis_loss: 1.10917, gen_loss 0.796167
Epoch 1/2, Step 280, dis_loss: 1.10323, gen_loss 1.32648
Epoch 1/2, Step 300, dis_loss: 1.16413, gen_loss 1.24718
Epoch 1/2, Step 320, dis_loss: 1.0492, gen_loss 0.813661
Epoch 1/2, Step 340, dis_loss: 1.03724, gen_loss 1.72066
Epoch 1/2, Step 360, dis_loss: 0.93482, gen_loss 1.00841
Epoch 1/2, Step 380, dis_loss: 1.05845, gen_loss 0.776085
Epoch 1/2, Step 400, dis_loss: 1.10554, gen_loss 2.50432
Epoch 1/2, Step 420, dis_loss: 1.1009, gen_loss 0.719627
Epoch 1/2, Step 440, dis_loss: 0.87063, gen_loss 1.16968
Epoch 1/2, Step 460, dis_loss: 1.17621, gen_loss 2.02712
Epoch 1/2, Step 480, dis_loss: 1.19418, gen_loss 1.70328
Epoch 1/2, Step 500, dis_loss: 1.06007, gen_loss 1.36729
Epoch 1/2, Step 520, dis_loss: 0.62667, gen_loss 1.59996
Epoch 1/2, Step 540, dis_loss: 0.717545, gen_loss 1.21798
Epoch 1/2, Step 560, dis_loss: 0.970199, gen_loss 0.846039
Epoch 1/2, Step 580, dis_loss: 1.23511, gen_loss 3.1589
Epoch 1/2, Step 600, dis_loss: 1.14588, gen_loss 0.746891
Epoch 1/2, Step 620, dis_loss: 1.12074, gen_loss 1.97142
Epoch 1/2, Step 640, dis_loss: 1.21673, gen_loss 2.39105
Epoch 1/2, Step 660, dis_loss: 1.01291, gen_loss 0.823274
Epoch 1/2, Step 680, dis_loss: 0.705859, gen_loss 2.17677
Epoch 1/2, Step 700, dis_loss: 1.03888, gen_loss 1.22607
Epoch 1/2, Step 720, dis_loss: 1.03209, gen_loss 1.51753
Epoch 1/2, Step 740, dis_loss: 1.03506, gen_loss 1.83376
Epoch 1/2, Step 760, dis_loss: 1.05952, gen_loss 1.39788
Epoch 1/2, Step 780, dis_loss: 0.978684, gen_loss 1.93605
Epoch 1/2, Step 800, dis_loss: 0.959527, gen_loss 1.50964
Epoch 1/2, Step 820, dis_loss: 1.13774, gen_loss 0.950888
Epoch 1/2, Step 840, dis_loss: 1.05978, gen_loss 1.13223
Epoch 1/2, Step 860, dis_loss: 1.02339, gen_loss 1.57003
Epoch 1/2, Step 880, dis_loss: 1.06237, gen_loss 0.933206
Epoch 1/2, Step 900, dis_loss: 0.936247, gen_loss 1.55257
Epoch 1/2, Step 920, dis_loss: 1.02242, gen_loss 0.972511
Epoch 1/2, Step 940, dis_loss: 0.96384, gen_loss 1.05904
Epoch 1/2, Step 960, dis_loss: 1.07798, gen_loss 1.50356
Epoch 1/2, Step 980, dis_loss: 1.09495, gen_loss 1.5877
Epoch 1/2, Step 1000, dis_loss: 0.980315, gen_loss 2.34544
Epoch 1/2, Step 1020, dis_loss: 1.05038, gen_loss 1.09425
Epoch 1/2, Step 1040, dis_loss: 1.06419, gen_loss 1.30956
Epoch 1/2, Step 1060, dis_loss: 1.08323, gen_loss 0.912595
Epoch 1/2, Step 1080, dis_loss: 1.02896, gen_loss 1.37719
Epoch 1/2, Step 1100, dis_loss: 1.09503, gen_loss 1.49354
Epoch 1/2, Step 1120, dis_loss: 1.05959, gen_loss 1.44431
Epoch 1/2, Step 1140, dis_loss: 1.04629, gen_loss 1.32095
Epoch 1/2, Step 1160, dis_loss: 1.12237, gen_loss 1.45996
Epoch 1/2, Step 1180, dis_loss: 1.11391, gen_loss 0.920685
Epoch 1/2, Step 1200, dis_loss: 1.2652, gen_loss 0.758453
Epoch 1/2, Step 1220, dis_loss: 1.12963, gen_loss 1.3456
Epoch 1/2, Step 1240, dis_loss: 1.40913, gen_loss 0.4749
Epoch 1/2, Step 1260, dis_loss: 1.23877, gen_loss 1.15527
Epoch 1/2, Step 1280, dis_loss: 1.2152, gen_loss 0.748171
Epoch 1/2, Step 1300, dis_loss: 1.01773, gen_loss 1.33859
Epoch 1/2, Step 1320, dis_loss: 1.10242, gen_loss 0.810852
Epoch 1/2, Step 1340, dis_loss: 1.16716, gen_loss 0.764041
Epoch 1/2, Step 1360, dis_loss: 1.15114, gen_loss 1.39584
Epoch 1/2, Step 1380, dis_loss: 1.18995, gen_loss 1.35243
Epoch 1/2, Step 1400, dis_loss: 1.23879, gen_loss 1.48827
Epoch 1/2, Step 1420, dis_loss: 1.21446, gen_loss 0.761234
Epoch 1/2, Step 1440, dis_loss: 1.27059, gen_loss 0.612926
Epoch 1/2, Step 1460, dis_loss: 1.21222, gen_loss 1.81292
Epoch 1/2, Step 1480, dis_loss: 1.0743, gen_loss 0.882896
Epoch 1/2, Step 1500, dis_loss: 1.06217, gen_loss 1.25353
Epoch 1/2, Step 1520, dis_loss: 1.18149, gen_loss 1.26447
Epoch 1/2, Step 1540, dis_loss: 1.25744, gen_loss 0.680314
Epoch 1/2, Step 1560, dis_loss: 1.25619, gen_loss 0.713669
Epoch 1/2, Step 1580, dis_loss: 1.07745, gen_loss 0.892221
Epoch 1/2, Step 1600, dis_loss: 1.18437, gen_loss 1.12998
Epoch 1/2, Step 1620, dis_loss: 1.25837, gen_loss 1.11858
Epoch 1/2, Step 1640, dis_loss: 1.12598, gen_loss 1.08921
Epoch 1/2, Step 1660, dis_loss: 1.14658, gen_loss 1.11972
Epoch 1/2, Step 1680, dis_loss: 1.35071, gen_loss 0.534604
Epoch 1/2, Step 1700, dis_loss: 1.18781, gen_loss 0.840706
Epoch 1/2, Step 1720, dis_loss: 1.06281, gen_loss 1.12176
Epoch 1/2, Step 1740, dis_loss: 1.06015, gen_loss 1.4528
Epoch 1/2, Step 1760, dis_loss: 1.13781, gen_loss 1.11419
Epoch 1/2, Step 1780, dis_loss: 1.14272, gen_loss 1.18782
Epoch 1/2, Step 1800, dis_loss: 1.14252, gen_loss 1.39681
Epoch 1/2, Step 1820, dis_loss: 1.27948, gen_loss 1.54713
Epoch 1/2, Step 1840, dis_loss: 1.15502, gen_loss 1.32916
Epoch 1/2, Step 1860, dis_loss: 1.233, gen_loss 0.649438
Epoch 2/2, Step 1880, dis_loss: 1.08622, gen_loss 1.037
Epoch 2/2, Step 1900, dis_loss: 1.18148, gen_loss 0.780559
Epoch 2/2, Step 1920, dis_loss: 1.23207, gen_loss 0.76094
Epoch 2/2, Step 1940, dis_loss: 1.19029, gen_loss 0.73508
Epoch 2/2, Step 1960, dis_loss: 1.21944, gen_loss 0.826236
Epoch 2/2, Step 1980, dis_loss: 1.18859, gen_loss 0.924875
Epoch 2/2, Step 2000, dis_loss: 1.21094, gen_loss 1.16232
Epoch 2/2, Step 2020, dis_loss: 1.33533, gen_loss 0.556323
Epoch 2/2, Step 2040, dis_loss: 1.17815, gen_loss 1.11648
Epoch 2/2, Step 2060, dis_loss: 1.14173, gen_loss 1.00898
Epoch 2/2, Step 2080, dis_loss: 1.24252, gen_loss 0.692386
Epoch 2/2, Step 2100, dis_loss: 1.16392, gen_loss 1.22767
Epoch 2/2, Step 2120, dis_loss: 1.33093, gen_loss 1.19559
Epoch 2/2, Step 2140, dis_loss: 1.33956, gen_loss 1.44347
Epoch 2/2, Step 2160, dis_loss: 1.1947, gen_loss 1.31802
Epoch 2/2, Step 2180, dis_loss: 1.2778, gen_loss 1.54743
Epoch 2/2, Step 2200, dis_loss: 1.3214, gen_loss 1.08403
Epoch 2/2, Step 2220, dis_loss: 1.29103, gen_loss 1.22498
Epoch 2/2, Step 2240, dis_loss: 1.22725, gen_loss 0.900007
Epoch 2/2, Step 2260, dis_loss: 1.28062, gen_loss 0.975779
Epoch 2/2, Step 2280, dis_loss: 1.21572, gen_loss 0.965036
Epoch 2/2, Step 2300, dis_loss: 1.25115, gen_loss 1.20532
Epoch 2/2, Step 2320, dis_loss: 1.17866, gen_loss 1.3241
Epoch 2/2, Step 2340, dis_loss: 1.21608, gen_loss 0.906599
Epoch 2/2, Step 2360, dis_loss: 1.17202, gen_loss 1.1585
Epoch 2/2, Step 2380, dis_loss: 1.15462, gen_loss 0.988766
Epoch 2/2, Step 2400, dis_loss: 1.24154, gen_loss 1.10384
Epoch 2/2, Step 2420, dis_loss: 1.27538, gen_loss 0.903791
Epoch 2/2, Step 2440, dis_loss: 1.24796, gen_loss 0.95628
Epoch 2/2, Step 2460, dis_loss: 1.18191, gen_loss 0.952052
Epoch 2/2, Step 2480, dis_loss: 1.24278, gen_loss 0.708316
Epoch 2/2, Step 2500, dis_loss: 1.1993, gen_loss 1.0373
Epoch 2/2, Step 2520, dis_loss: 1.12138, gen_loss 1.08902
Epoch 2/2, Step 2540, dis_loss: 1.25877, gen_loss 1.06885
Epoch 2/2, Step 2560, dis_loss: 1.22144, gen_loss 0.840789
Epoch 2/2, Step 2580, dis_loss: 1.39438, gen_loss 0.575516
Epoch 2/2, Step 2600, dis_loss: 1.2165, gen_loss 0.992156
Epoch 2/2, Step 2620, dis_loss: 1.28983, gen_loss 1.00093
Epoch 2/2, Step 2640, dis_loss: 1.24377, gen_loss 0.845812
Epoch 2/2, Step 2660, dis_loss: 1.25865, gen_loss 0.847224
Epoch 2/2, Step 2680, dis_loss: 1.25544, gen_loss 0.96441
Epoch 2/2, Step 2700, dis_loss: 1.22381, gen_loss 0.840159
Epoch 2/2, Step 2720, dis_loss: 1.25987, gen_loss 1.05764
Epoch 2/2, Step 2740, dis_loss: 1.33394, gen_loss 0.99277
Epoch 2/2, Step 2760, dis_loss: 1.34141, gen_loss 0.672188
Epoch 2/2, Step 2780, dis_loss: 1.23251, gen_loss 0.878994
Epoch 2/2, Step 2800, dis_loss: 1.22313, gen_loss 0.743052
Epoch 2/2, Step 2820, dis_loss: 1.26692, gen_loss 0.71185
Epoch 2/2, Step 2840, dis_loss: 1.24592, gen_loss 1.15678
Epoch 2/2, Step 2860, dis_loss: 1.2512, gen_loss 0.816461
Epoch 2/2, Step 2880, dis_loss: 1.31688, gen_loss 1.02278
Epoch 2/2, Step 2900, dis_loss: 1.111, gen_loss 0.906546
Epoch 2/2, Step 2920, dis_loss: 1.2727, gen_loss 1.07596
Epoch 2/2, Step 2940, dis_loss: 1.24809, gen_loss 1.01085
Epoch 2/2, Step 2960, dis_loss: 1.26908, gen_loss 0.854861
Epoch 2/2, Step 2980, dis_loss: 1.33875, gen_loss 0.801586
Epoch 2/2, Step 3000, dis_loss: 1.24608, gen_loss 1.07172
Epoch 2/2, Step 3020, dis_loss: 1.28756, gen_loss 1.10761
Epoch 2/2, Step 3040, dis_loss: 1.37315, gen_loss 1.27577
Epoch 2/2, Step 3060, dis_loss: 1.30849, gen_loss 0.726337
Epoch 2/2, Step 3080, dis_loss: 1.3699, gen_loss 0.656347
Epoch 2/2, Step 3100, dis_loss: 1.271, gen_loss 0.808468
Epoch 2/2, Step 3120, dis_loss: 1.29947, gen_loss 0.705067
Epoch 2/2, Step 3140, dis_loss: 1.30972, gen_loss 1.05097
Epoch 2/2, Step 3160, dis_loss: 1.2647, gen_loss 0.934016
Epoch 2/2, Step 3180, dis_loss: 1.28235, gen_loss 0.882791
Epoch 2/2, Step 3200, dis_loss: 1.29075, gen_loss 0.783826
Epoch 2/2, Step 3220, dis_loss: 1.29069, gen_loss 1.05543
Epoch 2/2, Step 3240, dis_loss: 1.27439, gen_loss 0.924
Epoch 2/2, Step 3260, dis_loss: 1.22903, gen_loss 1.0164
Epoch 2/2, Step 3280, dis_loss: 1.3269, gen_loss 0.796814
Epoch 2/2, Step 3300, dis_loss: 1.2748, gen_loss 0.816421
Epoch 2/2, Step 3320, dis_loss: 1.31809, gen_loss 0.877289
Epoch 2/2, Step 3340, dis_loss: 1.29644, gen_loss 0.960406
Epoch 2/2, Step 3360, dis_loss: 1.27547, gen_loss 0.983352
Epoch 2/2, Step 3380, dis_loss: 1.26076, gen_loss 0.839819
Epoch 2/2, Step 3400, dis_loss: 1.32453, gen_loss 1.10332
Epoch 2/2, Step 3420, dis_loss: 1.32299, gen_loss 0.691887
Epoch 2/2, Step 3440, dis_loss: 1.28419, gen_loss 0.923368
Epoch 2/2, Step 3460, dis_loss: 1.34884, gen_loss 0.780003
Epoch 2/2, Step 3480, dis_loss: 1.30444, gen_loss 0.826875
Epoch 2/2, Step 3500, dis_loss: 1.37459, gen_loss 1.25438
Epoch 2/2, Step 3520, dis_loss: 1.3645, gen_loss 1.16932
Epoch 2/2, Step 3540, dis_loss: 1.24118, gen_loss 1.04746
Epoch 2/2, Step 3560, dis_loss: 1.29816, gen_loss 0.728103
Epoch 2/2, Step 3580, dis_loss: 1.29478, gen_loss 0.902599
Epoch 2/2, Step 3600, dis_loss: 1.35328, gen_loss 0.652552
Epoch 2/2, Step 3620, dis_loss: 1.25305, gen_loss 0.909927
Epoch 2/2, Step 3640, dis_loss: 1.32195, gen_loss 1.06658
Epoch 2/2, Step 3660, dis_loss: 1.26691, gen_loss 0.894799
Epoch 2/2, Step 3680, dis_loss: 1.32265, gen_loss 1.10954
Epoch 2/2, Step 3700, dis_loss: 1.29261, gen_loss 0.862094
Epoch 2/2, Step 3720, dis_loss: 1.40272, gen_loss 1.39523
Epoch 2/2, Step 3740, dis_loss: 1.34543, gen_loss 0.794318
In [13]:
batch_size = 32
z_dim = 200
l_rate = 0.0005
beta1 = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, l_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
Epoch 1/1, Step 0, dis_loss: 1.97862, gen_loss 0.406612
Epoch 1/1, Step 20, dis_loss: 1.3087, gen_loss 1.95318
Epoch 1/1, Step 40, dis_loss: 0.841352, gen_loss 1.52878
Epoch 1/1, Step 60, dis_loss: 1.70189, gen_loss 5.39351
Epoch 1/1, Step 80, dis_loss: 0.605072, gen_loss 6.00656
Epoch 1/1, Step 100, dis_loss: 1.31545, gen_loss 1.09928
Epoch 1/1, Step 120, dis_loss: 1.40592, gen_loss 0.800308
Epoch 1/1, Step 140, dis_loss: 1.41023, gen_loss 0.833721
Epoch 1/1, Step 160, dis_loss: 1.42533, gen_loss 0.803867
Epoch 1/1, Step 180, dis_loss: 1.53003, gen_loss 0.690466
Epoch 1/1, Step 200, dis_loss: 1.38473, gen_loss 0.838377
Epoch 1/1, Step 220, dis_loss: 1.3281, gen_loss 0.854969
Epoch 1/1, Step 240, dis_loss: 1.37077, gen_loss 0.792359
Epoch 1/1, Step 260, dis_loss: 1.42524, gen_loss 0.903738
Epoch 1/1, Step 280, dis_loss: 1.39394, gen_loss 0.777909
Epoch 1/1, Step 300, dis_loss: 1.40909, gen_loss 0.789316
Epoch 1/1, Step 320, dis_loss: 1.37138, gen_loss 0.69894
Epoch 1/1, Step 340, dis_loss: 1.38484, gen_loss 0.817781
Epoch 1/1, Step 360, dis_loss: 1.33663, gen_loss 0.820921
Epoch 1/1, Step 380, dis_loss: 1.42205, gen_loss 0.822649
Epoch 1/1, Step 400, dis_loss: 1.41993, gen_loss 0.86126
Epoch 1/1, Step 420, dis_loss: 1.36798, gen_loss 0.916056
Epoch 1/1, Step 440, dis_loss: 1.4102, gen_loss 0.802189
Epoch 1/1, Step 460, dis_loss: 1.37613, gen_loss 0.795263
Epoch 1/1, Step 480, dis_loss: 1.4218, gen_loss 0.989553
Epoch 1/1, Step 500, dis_loss: 1.34081, gen_loss 0.974845
Epoch 1/1, Step 520, dis_loss: 1.36705, gen_loss 0.871596
Epoch 1/1, Step 540, dis_loss: 1.50561, gen_loss 0.740431
Epoch 1/1, Step 560, dis_loss: 1.4048, gen_loss 0.777101
Epoch 1/1, Step 580, dis_loss: 1.41191, gen_loss 0.799391
Epoch 1/1, Step 600, dis_loss: 1.3639, gen_loss 1.02471
Epoch 1/1, Step 620, dis_loss: 1.38806, gen_loss 0.768329
Epoch 1/1, Step 640, dis_loss: 1.34848, gen_loss 0.945326
Epoch 1/1, Step 660, dis_loss: 1.37761, gen_loss 0.838012
Epoch 1/1, Step 680, dis_loss: 1.38443, gen_loss 0.88313
Epoch 1/1, Step 700, dis_loss: 1.36439, gen_loss 0.894979
Epoch 1/1, Step 720, dis_loss: 1.39982, gen_loss 0.765402
Epoch 1/1, Step 740, dis_loss: 1.36699, gen_loss 0.748079
Epoch 1/1, Step 760, dis_loss: 1.42006, gen_loss 0.748961
Epoch 1/1, Step 780, dis_loss: 1.31314, gen_loss 1.00697
Epoch 1/1, Step 800, dis_loss: 1.37299, gen_loss 0.992281
Epoch 1/1, Step 820, dis_loss: 1.36594, gen_loss 0.86578
Epoch 1/1, Step 840, dis_loss: 1.39556, gen_loss 0.838155
Epoch 1/1, Step 860, dis_loss: 1.36699, gen_loss 0.981978
Epoch 1/1, Step 880, dis_loss: 1.40416, gen_loss 0.87637
Epoch 1/1, Step 900, dis_loss: 1.34588, gen_loss 0.783869
Epoch 1/1, Step 920, dis_loss: 1.33864, gen_loss 0.841624
Epoch 1/1, Step 940, dis_loss: 1.39552, gen_loss 0.761739
Epoch 1/1, Step 960, dis_loss: 1.33959, gen_loss 0.867109
Epoch 1/1, Step 980, dis_loss: 1.38818, gen_loss 0.821386
Epoch 1/1, Step 1000, dis_loss: 1.3602, gen_loss 0.871881
Epoch 1/1, Step 1020, dis_loss: 1.38479, gen_loss 0.833941
Epoch 1/1, Step 1040, dis_loss: 1.37552, gen_loss 0.756399
Epoch 1/1, Step 1060, dis_loss: 1.39849, gen_loss 0.928545
Epoch 1/1, Step 1080, dis_loss: 1.37405, gen_loss 0.945572
Epoch 1/1, Step 1100, dis_loss: 1.35265, gen_loss 0.832122
Epoch 1/1, Step 1120, dis_loss: 1.35282, gen_loss 0.756587
Epoch 1/1, Step 1140, dis_loss: 1.39737, gen_loss 0.806189
Epoch 1/1, Step 1160, dis_loss: 1.37277, gen_loss 0.810115
Epoch 1/1, Step 1180, dis_loss: 1.36406, gen_loss 0.91931
Epoch 1/1, Step 1200, dis_loss: 1.35638, gen_loss 0.853218
Epoch 1/1, Step 1220, dis_loss: 1.3697, gen_loss 0.809091
Epoch 1/1, Step 1240, dis_loss: 1.34575, gen_loss 0.868028
Epoch 1/1, Step 1260, dis_loss: 1.34414, gen_loss 1.02713
Epoch 1/1, Step 1280, dis_loss: 1.34142, gen_loss 0.806596
Epoch 1/1, Step 1300, dis_loss: 1.45182, gen_loss 0.842193
Epoch 1/1, Step 1320, dis_loss: 1.36143, gen_loss 0.998352
Epoch 1/1, Step 1340, dis_loss: 1.37253, gen_loss 0.890945
Epoch 1/1, Step 1360, dis_loss: 1.29108, gen_loss 0.999021
Epoch 1/1, Step 1380, dis_loss: 1.35184, gen_loss 0.8665
Epoch 1/1, Step 1400, dis_loss: 1.41568, gen_loss 0.823058
Epoch 1/1, Step 1420, dis_loss: 1.35246, gen_loss 0.917729
Epoch 1/1, Step 1440, dis_loss: 1.36921, gen_loss 0.927948
Epoch 1/1, Step 1460, dis_loss: 1.40633, gen_loss 0.705186
Epoch 1/1, Step 1480, dis_loss: 1.39631, gen_loss 0.89628
Epoch 1/1, Step 1500, dis_loss: 1.39296, gen_loss 0.745463
Epoch 1/1, Step 1520, dis_loss: 1.31208, gen_loss 0.75183
Epoch 1/1, Step 1540, dis_loss: 1.36691, gen_loss 0.889191
Epoch 1/1, Step 1560, dis_loss: 1.33949, gen_loss 0.747866
Epoch 1/1, Step 1580, dis_loss: 1.39029, gen_loss 0.904631
Epoch 1/1, Step 1600, dis_loss: 1.40464, gen_loss 0.829899
Epoch 1/1, Step 1620, dis_loss: 1.39981, gen_loss 0.784571
Epoch 1/1, Step 1640, dis_loss: 1.40356, gen_loss 0.661173
Epoch 1/1, Step 1660, dis_loss: 1.37238, gen_loss 0.812971
Epoch 1/1, Step 1680, dis_loss: 1.36175, gen_loss 1.07224
Epoch 1/1, Step 1700, dis_loss: 1.35934, gen_loss 0.819472
Epoch 1/1, Step 1720, dis_loss: 1.32074, gen_loss 1.03906
Epoch 1/1, Step 1740, dis_loss: 1.39316, gen_loss 0.735469
Epoch 1/1, Step 1760, dis_loss: 1.38061, gen_loss 0.777719
Epoch 1/1, Step 1780, dis_loss: 1.37231, gen_loss 0.739477
Epoch 1/1, Step 1800, dis_loss: 1.36975, gen_loss 0.839912
Epoch 1/1, Step 1820, dis_loss: 1.34886, gen_loss 0.901559
Epoch 1/1, Step 1840, dis_loss: 1.36644, gen_loss 0.771907
Epoch 1/1, Step 1860, dis_loss: 1.33725, gen_loss 0.982668
Epoch 1/1, Step 1880, dis_loss: 1.37105, gen_loss 0.803177
Epoch 1/1, Step 1900, dis_loss: 1.39823, gen_loss 0.956969
Epoch 1/1, Step 1920, dis_loss: 1.3756, gen_loss 0.830964
Epoch 1/1, Step 1940, dis_loss: 1.36219, gen_loss 0.936039
Epoch 1/1, Step 1960, dis_loss: 1.35484, gen_loss 0.743035
Epoch 1/1, Step 1980, dis_loss: 1.37401, gen_loss 0.8685
Epoch 1/1, Step 2000, dis_loss: 1.35709, gen_loss 0.890938
Epoch 1/1, Step 2020, dis_loss: 1.38673, gen_loss 0.798728
Epoch 1/1, Step 2040, dis_loss: 1.32346, gen_loss 0.838207
Epoch 1/1, Step 2060, dis_loss: 1.38232, gen_loss 0.829166
Epoch 1/1, Step 2080, dis_loss: 1.38334, gen_loss 0.91777
Epoch 1/1, Step 2100, dis_loss: 1.39209, gen_loss 0.853906
Epoch 1/1, Step 2120, dis_loss: 1.37625, gen_loss 0.808932
Epoch 1/1, Step 2140, dis_loss: 1.35664, gen_loss 0.766762
Epoch 1/1, Step 2160, dis_loss: 1.32273, gen_loss 0.80543
Epoch 1/1, Step 2180, dis_loss: 1.34107, gen_loss 0.84304
Epoch 1/1, Step 2200, dis_loss: 1.24369, gen_loss 0.832861
Epoch 1/1, Step 2220, dis_loss: 1.10672, gen_loss 0.96053
Epoch 1/1, Step 2240, dis_loss: 0.612721, gen_loss 1.71569
Epoch 1/1, Step 2260, dis_loss: 1.36474, gen_loss 0.940059
Epoch 1/1, Step 2280, dis_loss: 1.37852, gen_loss 0.759865
Epoch 1/1, Step 2300, dis_loss: 1.35705, gen_loss 0.856798
Epoch 1/1, Step 2320, dis_loss: 1.35433, gen_loss 0.839781
Epoch 1/1, Step 2340, dis_loss: 1.34661, gen_loss 0.744017
Epoch 1/1, Step 2360, dis_loss: 1.35057, gen_loss 0.816077
Epoch 1/1, Step 2380, dis_loss: 1.33703, gen_loss 0.805057
Epoch 1/1, Step 2400, dis_loss: 1.41153, gen_loss 0.697576
Epoch 1/1, Step 2420, dis_loss: 1.38087, gen_loss 0.803436
Epoch 1/1, Step 2440, dis_loss: 1.37642, gen_loss 0.822833
Epoch 1/1, Step 2460, dis_loss: 1.35384, gen_loss 0.982197
Epoch 1/1, Step 2480, dis_loss: 1.38036, gen_loss 0.853883
Epoch 1/1, Step 2500, dis_loss: 1.39317, gen_loss 0.813519
Epoch 1/1, Step 2520, dis_loss: 1.35322, gen_loss 0.944904
Epoch 1/1, Step 2540, dis_loss: 1.39753, gen_loss 0.860643
Epoch 1/1, Step 2560, dis_loss: 1.39169, gen_loss 0.836467
Epoch 1/1, Step 2580, dis_loss: 1.35827, gen_loss 1.02342
Epoch 1/1, Step 2600, dis_loss: 1.36641, gen_loss 0.852743
Epoch 1/1, Step 2620, dis_loss: 1.35584, gen_loss 0.722088
Epoch 1/1, Step 2640, dis_loss: 1.37232, gen_loss 0.812013
Epoch 1/1, Step 2660, dis_loss: 1.35039, gen_loss 0.816968
Epoch 1/1, Step 2680, dis_loss: 1.39446, gen_loss 0.768618
Epoch 1/1, Step 2700, dis_loss: 1.40248, gen_loss 0.813901
Epoch 1/1, Step 2720, dis_loss: 1.38127, gen_loss 0.803048
Epoch 1/1, Step 2740, dis_loss: 1.38817, gen_loss 0.786241
Epoch 1/1, Step 2760, dis_loss: 1.37852, gen_loss 0.836688
Epoch 1/1, Step 2780, dis_loss: 1.36505, gen_loss 0.709948
Epoch 1/1, Step 2800, dis_loss: 1.37753, gen_loss 0.796736
Epoch 1/1, Step 2820, dis_loss: 1.37335, gen_loss 0.82065
Epoch 1/1, Step 2840, dis_loss: 1.37812, gen_loss 0.760913
Epoch 1/1, Step 2860, dis_loss: 1.37183, gen_loss 0.794736
Epoch 1/1, Step 2880, dis_loss: 1.3795, gen_loss 0.726403
Epoch 1/1, Step 2900, dis_loss: 1.37347, gen_loss 0.852651
Epoch 1/1, Step 2920, dis_loss: 1.38831, gen_loss 0.780388
Epoch 1/1, Step 2940, dis_loss: 1.35278, gen_loss 0.890515
Epoch 1/1, Step 2960, dis_loss: 1.36112, gen_loss 0.764247
Epoch 1/1, Step 2980, dis_loss: 1.35621, gen_loss 0.71482
Epoch 1/1, Step 3000, dis_loss: 1.37163, gen_loss 0.820983
Epoch 1/1, Step 3020, dis_loss: 1.3662, gen_loss 0.775073
Epoch 1/1, Step 3040, dis_loss: 1.34724, gen_loss 0.740933
Epoch 1/1, Step 3060, dis_loss: 1.36184, gen_loss 0.862401
Epoch 1/1, Step 3080, dis_loss: 1.37385, gen_loss 0.822385
Epoch 1/1, Step 3100, dis_loss: 1.36304, gen_loss 0.799195
Epoch 1/1, Step 3120, dis_loss: 1.36633, gen_loss 0.813632
Epoch 1/1, Step 3140, dis_loss: 1.38165, gen_loss 0.709101
Epoch 1/1, Step 3160, dis_loss: 1.37743, gen_loss 0.718992
Epoch 1/1, Step 3180, dis_loss: 1.38546, gen_loss 0.827922
Epoch 1/1, Step 3200, dis_loss: 1.37625, gen_loss 0.810684
Epoch 1/1, Step 3220, dis_loss: 1.38481, gen_loss 0.861319
Epoch 1/1, Step 3240, dis_loss: 1.33774, gen_loss 0.875542
Epoch 1/1, Step 3260, dis_loss: 1.33931, gen_loss 0.867725
Epoch 1/1, Step 3280, dis_loss: 1.3712, gen_loss 0.891036
Epoch 1/1, Step 3300, dis_loss: 1.36426, gen_loss 0.841219
Epoch 1/1, Step 3320, dis_loss: 1.36619, gen_loss 0.764983
Epoch 1/1, Step 3340, dis_loss: 1.35805, gen_loss 0.872024
Epoch 1/1, Step 3360, dis_loss: 1.38218, gen_loss 0.820358
Epoch 1/1, Step 3380, dis_loss: 1.33967, gen_loss 0.922751
Epoch 1/1, Step 3400, dis_loss: 1.36202, gen_loss 0.797993
Epoch 1/1, Step 3420, dis_loss: 1.37105, gen_loss 0.798561
Epoch 1/1, Step 3440, dis_loss: 1.3778, gen_loss 0.862051
Epoch 1/1, Step 3460, dis_loss: 1.35444, gen_loss 0.782524
Epoch 1/1, Step 3480, dis_loss: 1.36177, gen_loss 0.721053
Epoch 1/1, Step 3500, dis_loss: 1.36577, gen_loss 0.797433
Epoch 1/1, Step 3520, dis_loss: 1.35613, gen_loss 0.894394
Epoch 1/1, Step 3540, dis_loss: 1.38949, gen_loss 0.758349
Epoch 1/1, Step 3560, dis_loss: 1.37008, gen_loss 0.779726
Epoch 1/1, Step 3580, dis_loss: 1.35841, gen_loss 0.845033
Epoch 1/1, Step 3600, dis_loss: 1.38065, gen_loss 0.867651
Epoch 1/1, Step 3620, dis_loss: 1.36041, gen_loss 0.933675
Epoch 1/1, Step 3640, dis_loss: 1.37297, gen_loss 0.776065
Epoch 1/1, Step 3660, dis_loss: 1.36976, gen_loss 0.843068
Epoch 1/1, Step 3680, dis_loss: 1.34044, gen_loss 0.834506
Epoch 1/1, Step 3700, dis_loss: 1.3533, gen_loss 0.834796
Epoch 1/1, Step 3720, dis_loss: 1.38185, gen_loss 0.875656
Epoch 1/1, Step 3740, dis_loss: 1.34288, gen_loss 0.749829
Epoch 1/1, Step 3760, dis_loss: 1.38183, gen_loss 0.854742
Epoch 1/1, Step 3780, dis_loss: 1.38062, gen_loss 0.760274
Epoch 1/1, Step 3800, dis_loss: 1.32501, gen_loss 0.859197
Epoch 1/1, Step 3820, dis_loss: 1.38457, gen_loss 0.891808
Epoch 1/1, Step 3840, dis_loss: 1.39197, gen_loss 0.818407
Epoch 1/1, Step 3860, dis_loss: 1.36585, gen_loss 0.867746
Epoch 1/1, Step 3880, dis_loss: 1.37233, gen_loss 0.856306
Epoch 1/1, Step 3900, dis_loss: 1.3886, gen_loss 0.834012
Epoch 1/1, Step 3920, dis_loss: 1.36618, gen_loss 0.816207
Epoch 1/1, Step 3940, dis_loss: 1.3731, gen_loss 0.818411
Epoch 1/1, Step 3960, dis_loss: 1.35177, gen_loss 0.754937
Epoch 1/1, Step 3980, dis_loss: 1.35972, gen_loss 0.837166
Epoch 1/1, Step 4000, dis_loss: 1.37669, gen_loss 0.803074
Epoch 1/1, Step 4020, dis_loss: 1.32535, gen_loss 0.904274
Epoch 1/1, Step 4040, dis_loss: 1.37655, gen_loss 0.873708
Epoch 1/1, Step 4060, dis_loss: 1.34274, gen_loss 1.01038
Epoch 1/1, Step 4080, dis_loss: 1.35647, gen_loss 0.796736
Epoch 1/1, Step 4100, dis_loss: 1.36571, gen_loss 0.834627
Epoch 1/1, Step 4120, dis_loss: 1.37045, gen_loss 0.807493
Epoch 1/1, Step 4140, dis_loss: 1.34883, gen_loss 0.811732
Epoch 1/1, Step 4160, dis_loss: 1.37583, gen_loss 0.77441
Epoch 1/1, Step 4180, dis_loss: 1.38145, gen_loss 0.904847
Epoch 1/1, Step 4200, dis_loss: 1.37219, gen_loss 0.783353
Epoch 1/1, Step 4220, dis_loss: 1.36436, gen_loss 0.770588
Epoch 1/1, Step 4240, dis_loss: 1.37067, gen_loss 0.791252
Epoch 1/1, Step 4260, dis_loss: 1.37095, gen_loss 0.844193
Epoch 1/1, Step 4280, dis_loss: 1.38011, gen_loss 0.828691
Epoch 1/1, Step 4300, dis_loss: 1.36852, gen_loss 0.895499
Epoch 1/1, Step 4320, dis_loss: 1.38189, gen_loss 0.827894
Epoch 1/1, Step 4340, dis_loss: 1.36165, gen_loss 0.877406
Epoch 1/1, Step 4360, dis_loss: 1.35248, gen_loss 0.810888
Epoch 1/1, Step 4380, dis_loss: 1.33868, gen_loss 0.765135
Epoch 1/1, Step 4400, dis_loss: 1.36308, gen_loss 0.900666
Epoch 1/1, Step 4420, dis_loss: 1.38002, gen_loss 0.7384
Epoch 1/1, Step 4440, dis_loss: 1.371, gen_loss 0.778295
Epoch 1/1, Step 4460, dis_loss: 1.3663, gen_loss 0.760029
Epoch 1/1, Step 4480, dis_loss: 1.38376, gen_loss 0.829789
Epoch 1/1, Step 4500, dis_loss: 1.39586, gen_loss 0.842515
Epoch 1/1, Step 4520, dis_loss: 1.37409, gen_loss 0.875016
Epoch 1/1, Step 4540, dis_loss: 1.38111, gen_loss 0.754356
Epoch 1/1, Step 4560, dis_loss: 1.32868, gen_loss 0.963527
Epoch 1/1, Step 4580, dis_loss: 1.36616, gen_loss 0.801136
Epoch 1/1, Step 4600, dis_loss: 1.35772, gen_loss 0.747165
Epoch 1/1, Step 4620, dis_loss: 1.3392, gen_loss 0.766583
Epoch 1/1, Step 4640, dis_loss: 1.36868, gen_loss 0.812309
Epoch 1/1, Step 4660, dis_loss: 1.38123, gen_loss 0.891382
Epoch 1/1, Step 4680, dis_loss: 1.37301, gen_loss 0.750403
Epoch 1/1, Step 4700, dis_loss: 1.37163, gen_loss 0.75471
Epoch 1/1, Step 4720, dis_loss: 1.35228, gen_loss 0.922855
Epoch 1/1, Step 4740, dis_loss: 1.37053, gen_loss 0.738562
Epoch 1/1, Step 4760, dis_loss: 1.37073, gen_loss 0.904715
Epoch 1/1, Step 4780, dis_loss: 1.36909, gen_loss 0.801306
Epoch 1/1, Step 4800, dis_loss: 1.37077, gen_loss 0.793875
Epoch 1/1, Step 4820, dis_loss: 1.34502, gen_loss 0.787413
Epoch 1/1, Step 4840, dis_loss: 1.36628, gen_loss 0.811857
Epoch 1/1, Step 4860, dis_loss: 1.38542, gen_loss 0.813228
Epoch 1/1, Step 4880, dis_loss: 1.36314, gen_loss 0.830825
Epoch 1/1, Step 4900, dis_loss: 1.35983, gen_loss 0.833032
Epoch 1/1, Step 4920, dis_loss: 1.38075, gen_loss 0.770127
Epoch 1/1, Step 4940, dis_loss: 1.3729, gen_loss 0.825237
Epoch 1/1, Step 4960, dis_loss: 1.34175, gen_loss 0.715535
Epoch 1/1, Step 4980, dis_loss: 1.37518, gen_loss 0.735889
Epoch 1/1, Step 5000, dis_loss: 1.36598, gen_loss 0.722971
Epoch 1/1, Step 5020, dis_loss: 1.36038, gen_loss 0.803558
Epoch 1/1, Step 5040, dis_loss: 1.3669, gen_loss 0.787333
Epoch 1/1, Step 5060, dis_loss: 1.36954, gen_loss 0.78071
Epoch 1/1, Step 5080, dis_loss: 1.36208, gen_loss 0.874275
Epoch 1/1, Step 5100, dis_loss: 1.3762, gen_loss 0.845329
Epoch 1/1, Step 5120, dis_loss: 1.36931, gen_loss 0.822492
Epoch 1/1, Step 5140, dis_loss: 1.36364, gen_loss 0.787925
Epoch 1/1, Step 5160, dis_loss: 1.32191, gen_loss 0.890915
Epoch 1/1, Step 5180, dis_loss: 1.36501, gen_loss 0.78865
Epoch 1/1, Step 5200, dis_loss: 1.38882, gen_loss 0.869345
Epoch 1/1, Step 5220, dis_loss: 1.37601, gen_loss 0.801853
Epoch 1/1, Step 5240, dis_loss: 1.35464, gen_loss 0.847825
Epoch 1/1, Step 5260, dis_loss: 1.37362, gen_loss 0.858664
Epoch 1/1, Step 5280, dis_loss: 1.34154, gen_loss 0.814672
Epoch 1/1, Step 5300, dis_loss: 1.36094, gen_loss 0.842203
Epoch 1/1, Step 5320, dis_loss: 1.37476, gen_loss 0.757341
Epoch 1/1, Step 5340, dis_loss: 1.3818, gen_loss 0.78415
Epoch 1/1, Step 5360, dis_loss: 1.36225, gen_loss 0.894177
Epoch 1/1, Step 5380, dis_loss: 1.3853, gen_loss 0.767133
Epoch 1/1, Step 5400, dis_loss: 1.35644, gen_loss 0.867297
Epoch 1/1, Step 5420, dis_loss: 1.36836, gen_loss 0.903139
Epoch 1/1, Step 5440, dis_loss: 1.36046, gen_loss 0.804741
Epoch 1/1, Step 5460, dis_loss: 1.36289, gen_loss 0.809154
Epoch 1/1, Step 5480, dis_loss: 1.35777, gen_loss 0.919722
Epoch 1/1, Step 5500, dis_loss: 1.36743, gen_loss 0.838796
Epoch 1/1, Step 5520, dis_loss: 1.36896, gen_loss 0.901017
Epoch 1/1, Step 5540, dis_loss: 1.38078, gen_loss 0.766844
Epoch 1/1, Step 5560, dis_loss: 1.37486, gen_loss 0.803952
Epoch 1/1, Step 5580, dis_loss: 1.37936, gen_loss 0.777562
Epoch 1/1, Step 5600, dis_loss: 1.35686, gen_loss 0.785992
Epoch 1/1, Step 5620, dis_loss: 1.38042, gen_loss 0.846305
Epoch 1/1, Step 5640, dis_loss: 1.36874, gen_loss 0.871215
Epoch 1/1, Step 5660, dis_loss: 1.35343, gen_loss 0.80997
Epoch 1/1, Step 5680, dis_loss: 1.35799, gen_loss 0.817325
Epoch 1/1, Step 5700, dis_loss: 1.37164, gen_loss 0.847513
Epoch 1/1, Step 5720, dis_loss: 1.3708, gen_loss 0.779688
Epoch 1/1, Step 5740, dis_loss: 1.37137, gen_loss 0.77208
Epoch 1/1, Step 5760, dis_loss: 1.37448, gen_loss 0.798169
Epoch 1/1, Step 5780, dis_loss: 1.35298, gen_loss 0.866097
Epoch 1/1, Step 5800, dis_loss: 1.38308, gen_loss 0.736609
Epoch 1/1, Step 5820, dis_loss: 1.37633, gen_loss 0.765545
Epoch 1/1, Step 5840, dis_loss: 1.36981, gen_loss 0.887428
Epoch 1/1, Step 5860, dis_loss: 1.36602, gen_loss 0.731579
Epoch 1/1, Step 5880, dis_loss: 1.35895, gen_loss 0.777939
Epoch 1/1, Step 5900, dis_loss: 1.34566, gen_loss 0.756104
Epoch 1/1, Step 5920, dis_loss: 1.36116, gen_loss 0.852719
Epoch 1/1, Step 5940, dis_loss: 1.37513, gen_loss 0.797669
Epoch 1/1, Step 5960, dis_loss: 1.37291, gen_loss 0.784348
Epoch 1/1, Step 5980, dis_loss: 1.35092, gen_loss 0.766691
Epoch 1/1, Step 6000, dis_loss: 1.3639, gen_loss 0.716902
Epoch 1/1, Step 6020, dis_loss: 1.36575, gen_loss 0.745345
Epoch 1/1, Step 6040, dis_loss: 1.36585, gen_loss 0.787063
Epoch 1/1, Step 6060, dis_loss: 1.36062, gen_loss 0.803145
Epoch 1/1, Step 6080, dis_loss: 1.35878, gen_loss 0.833981
Epoch 1/1, Step 6100, dis_loss: 1.36689, gen_loss 0.826147
Epoch 1/1, Step 6120, dis_loss: 1.37726, gen_loss 0.820392
Epoch 1/1, Step 6140, dis_loss: 1.36869, gen_loss 0.803292
Epoch 1/1, Step 6160, dis_loss: 1.35819, gen_loss 0.873097
Epoch 1/1, Step 6180, dis_loss: 1.35137, gen_loss 0.788968
Epoch 1/1, Step 6200, dis_loss: 1.37147, gen_loss 0.853982
Epoch 1/1, Step 6220, dis_loss: 1.39304, gen_loss 0.843846
Epoch 1/1, Step 6240, dis_loss: 1.34461, gen_loss 0.781638
Epoch 1/1, Step 6260, dis_loss: 1.352, gen_loss 0.821324
Epoch 1/1, Step 6280, dis_loss: 1.38499, gen_loss 0.823584
Epoch 1/1, Step 6300, dis_loss: 1.35584, gen_loss 0.78966
Epoch 1/1, Step 6320, dis_loss: 1.38092, gen_loss 0.814936

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
